Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- Elkins, Noah; Hayes, Bruce; Jo, Jinyoung; Siah, Jian-Leat (Ed.) This paper discusses gaps in stress typology that are unexpected from the perspective of a foot-based theory and shows that the patterns pose difficulties for a computationally implemented learning algorithm. The unattested patterns result from combining theoretical elements whose effects are generally well attested, including iambic footing, nonfinality, word-edge alignment, and a foot binarity requirement. The patterns can be found among the 124 target stress systems constructed by Tesar and Smolensky (2000) as a test of their approach to hidden structure learning. A learner with a Maximum Entropy grammar that uses a form of Expectation Maximization to deal with hidden structure was found to fail often on these unattested languages.
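As a rough illustration of the learning setup described in that abstract, the sketch below implements a Maximum Entropy grammar over hidden footings and fits its constraint weights with an Expectation-Maximization-style update that marginalizes over the parses compatible with an observed overt stress pattern. The constraint set, candidate parses, and violation counts are invented toy values, not the Tesar and Smolensky (2000) materials or the paper's implementation.

```python
import numpy as np

# Candidates for one toy underlying form /sisisi/: (hidden parse, overt stress, violations).
# Violation counts are for [Iamb, NonFinality, AlignEdge, FootBinarity] and are invented.
CANDIDATES = [
    ("(si.SI)si", "010", np.array([0.0, 0.0, 1.0, 0.0])),
    ("si(si.SI)", "001", np.array([0.0, 1.0, 0.0, 0.0])),
    ("(SI.si)si", "100", np.array([1.0, 0.0, 1.0, 0.0])),
    ("si(SI)si",  "010", np.array([0.0, 0.0, 1.0, 1.0])),
]
OBSERVED_OVERT = "010"   # the learner sees only overt stress, never the foot structure

def maxent_probs(weights):
    # P(candidate) is proportional to exp(-harmony), where harmony = weights . violations.
    harmonies = np.array([weights @ v for _, _, v in CANDIDATES])
    expw = np.exp(-harmonies)
    return expw / expw.sum()

def em_step(weights, lr=0.5):
    p = maxent_probs(weights)
    # E-step: posterior over the hidden parses compatible with the observed overt form.
    mask = np.array([float(overt == OBSERVED_OVERT) for _, overt, _ in CANDIDATES])
    posterior = p * mask
    posterior /= posterior.sum()
    # M-step (one gradient step on the log-likelihood of the overt form):
    # expected violations under the posterior minus the expectation under the full distribution.
    V = np.stack([v for _, _, v in CANDIDATES])
    grad = posterior @ V - p @ V
    return np.maximum(weights - lr * grad, 0.0)   # keep weights nonnegative

weights = np.ones(4)
for _ in range(200):
    weights = em_step(weights)

print("learned weights:", weights.round(2))
best = CANDIDATES[int(np.argmax(maxent_probs(weights)))]
print("most probable parse:", best[0], "with overt stress", best[1])
```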
- Reduplicative linguistic patterns have been used as evidence for explicit algebraic variables in models of cognition. Here, we show that a variable-free neural network can model these patterns in a way that predicts observed human behavior. Specifically, we successfully simulate the three experiments presented by Marcus et al. (1999), as well as Endress et al.’s (2007) partial replication of one of those experiments. We then explore the model’s ability to generalize reduplicative mappings to different kinds of novel inputs. Using Berent’s (2013) scopes of generalization as a metric, we claim that the model matches the scope of generalization that has been observed in humans. We argue that these results challenge past claims about the necessity of symbolic variables in models of cognition.
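For illustration only, here is a small variable-free probe in the spirit of the modeling just described, not the paper's model: syllables are arbitrary dense vectors, a small network is trained to complete ABB strings, and held-out syllables are used to ask whether its predictions sit closer to an ABB than to an ABA completion. The syllable inventory, vector size, and network shape are all assumptions.

```python
import torch

torch.manual_seed(0)
SYLLABLES = ["ga", "ti", "na", "li", "wo", "fe", "de", "ko"]   # hypothetical inventory
DIM = 16
emb = {s: torch.randn(DIM) for s in SYLLABLES}   # fixed, arbitrary syllable codes (no explicit variables)

def encode(a, b):
    # Input is the first two syllables of an A-B-? string, as one concatenated vector.
    return torch.cat([emb[a], emb[b]])

# Train on ABB strings built from half the inventory; the rest is held out for testing.
train_sylls, test_sylls = SYLLABLES[:4], SYLLABLES[4:]
train = [(encode(a, b), emb[b]) for a in train_sylls for b in train_sylls if a != b]

net = torch.nn.Sequential(
    torch.nn.Linear(2 * DIM, 64), torch.nn.Tanh(), torch.nn.Linear(64, DIM)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = torch.nn.MSELoss()
for epoch in range(300):
    for x, y in train:
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()

# Probe: for unseen syllables, is the prediction closer to the ABB completion (b)
# than to the ABA completion (a)?
with torch.no_grad():
    abb, aba = [], []
    for a in test_sylls:
        for b in test_sylls:
            if a == b:
                continue
            pred = net(encode(a, b))
            abb.append(torch.dist(pred, emb[b]).item())
            aba.append(torch.dist(pred, emb[a]).item())
print("mean distance to ABB completion:", sum(abb) / len(abb))
print("mean distance to ABA completion:", sum(aba) / len(aba))
```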
- Ettinger, Allyson; Hunter, Tim; Prickett, Brandon (Ed.) We present the first application of modern neural networks to the well-studied task of learning word stress systems. We tested our adaptation of a sequence-to-sequence network on the Tesar and Smolensky test set of 124 “languages”, showing that it acquires generalizable representations of stress patterns in a very high proportion of runs. We also show that it learns restricted, lexically conditioned patterns known as stress windows. The ability of this model to acquire lexical idiosyncrasies, which are very common in natural language systems, sets it apart from past, non-neural models tested on the Tesar and Smolensky data set.
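To make the task concrete, the sketch below poses one such stress "language" as a sequence transduction problem: strings of syllable weights map to stress patterns. It uses a simple recurrent tagger rather than the paper's sequence-to-sequence architecture, and the toy training data encode a hypothetical fixed-initial-stress system rather than any of the 124 target languages.

```python
import torch

torch.manual_seed(0)
WEIGHTS = {"L": 0, "H": 1}   # L = light syllable, H = heavy syllable
STRESS = {"0": 0, "1": 1}    # 0 = unstressed, 1 = primary stress

# Toy training pairs for a hypothetical fixed-initial-stress "language".
DATA = [("LL", "10"), ("LLL", "100"), ("HLL", "100"), ("LHLL", "1000"), ("HLLLL", "10000")]

class StressTagger(torch.nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.embed = torch.nn.Embedding(len(WEIGHTS), 8)
        self.rnn = torch.nn.GRU(8, hidden, batch_first=True, bidirectional=True)
        self.out = torch.nn.Linear(2 * hidden, len(STRESS))

    def forward(self, syllables):
        x = self.embed(syllables)   # (1, T, 8)
        h, _ = self.rnn(x)          # (1, T, 2 * hidden)
        return self.out(h)          # per-syllable stress logits

def tensors(word, pattern):
    x = torch.tensor([[WEIGHTS[c] for c in word]])
    y = torch.tensor([STRESS[c] for c in pattern])
    return x, y

model = StressTagger()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(300):
    for word, pattern in DATA:
        x, y = tensors(word, pattern)
        opt.zero_grad()
        torch.nn.functional.cross_entropy(model(x)[0], y).backward()
        opt.step()

# Probe generalization to an unseen word shape: the toy pattern stresses the first syllable.
x, _ = tensors("LLLL", "1000")
print(model(x)[0].argmax(dim=-1).tolist())
```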
- The experimental study of artificial language learning has become a widely used means of investigating the predictions of theories of language learning and representation. Although much is now known about the generalizations that learners make from various kinds of data, relatively little is known about how those representations affect speech processing. This paper presents an event-related potential (ERP) study of brain responses to violations of lab-learned phonotactics. Novel words that violated a learned phonotactic constraint elicited a larger Late Positive Component (LPC) than novel words that satisfied it. Similar LPCs have been found for violations of natively acquired linguistic structure, as well as for violations of other types of abstract generalizations, such as musical structure. We argue that lab-learned phonotactic generalizations are represented abstractly and affect the evaluation of speech in a manner that is similar to natively acquired syntactic and phonological rules.
- Reduplication is common, but analogous reversal processes are rare, even though reversal, which involves nested rather than crossed dependencies, is less complex on the Chomsky hierarchy. We hypothesize that the explanation is that repetitions can be recognized when they match and reactivate a stored trace in short-term memory, but recognizing a reversal requires rearranging the input in working memory before attempting to match it to the stored trace. Repetitions can thus be recognized, and repetition patterns learned, implicitly, whereas reversals require explicit, conscious awareness. To test these hypotheses, participants were trained to recognize either a reduplication or a syllable-reversal pattern, and then asked to state the rule. In two experiments, above-chance classification performance on the Reversal pattern was confined to Correct Staters, whereas above-chance performance on the Reduplication pattern was found with or without correct rule-stating. Final proportion correct was positively correlated with final response time for the Reversal Correct Staters but for no other group. These results support the hypothesis that reversal, unlike reduplication, requires conscious, time-consuming computation.
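The contrast at issue can be stated as two string functions: copying a stem lines corresponding syllables up in order (crossed dependencies), while mirroring it pairs them inside-out (nested dependencies). The syllabified example stem below is hypothetical.

```python
def reduplication(stem):
    """Copy the stem: corresponding syllables line up in order (crossed dependencies)."""
    return stem + stem

def reversal(stem):
    """Mirror the stem: corresponding syllables pair inside-out (nested dependencies)."""
    return stem + stem[::-1]

stem = ["ba", "ku", "ti"]          # hypothetical syllabified stem
print(reduplication(stem))         # ['ba', 'ku', 'ti', 'ba', 'ku', 'ti']
print(reversal(stem))              # ['ba', 'ku', 'ti', 'ti', 'ku', 'ba']
```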